Data Structures and Algorithms -- Sorting Algorithms

Sorting often serves as a preprocessing step for other algorithms, so its importance should not be underestimated. This article covers bubble sort, selection sort, insertion sort, shell sort, merge sort, quick sort, heap sort, bucket sort, and counting sort.

Overview

algorithm   in-place   worst          average        best        space        remark
bubble      yes        $N^2$          $N^2$          $N$         $O(1)$       best case requires the early-exit flag
selection   yes        $N^2$          $N^2$          $N^2$       $O(1)$       ——
insertion   yes        $N^2$          $N^2$          $N$         $O(1)$       ——
shell       yes        gap-dependent  gap-dependent  $N\log N$   $O(1)$       depends on the gap sequence
merge       no         $N\log N$      $N\log N$      $N\log N$   $O(N)$       stable
quick       yes        $N^2$          $N\log N$      $N\log N$   $O(\log N)$  ——
heap        yes        $N\log N$      $N\log N$      $N\log N$   $O(1)$       ——

Bubble Sort

The idea behind bubble sort is very simple: repeatedly walk through the list to be sorted, comparing adjacent elements two at a time and swapping them if they are in the wrong order.
An internal, stable sort; time complexity $O(N^2)$, space complexity $O(1)$

Steps:

  1. Compare adjacent elements. If the first is larger than the second, swap them.
  2. Do the same for every pair from position 0 to n-1. After this pass, the largest number has "bubbled" to the last position of the array.
  3. Repeat the passes over progressively fewer elements, until no pair of numbers needs to be compared.

Each pass "bubbles" a maximum into its final position, so after pass i the last i numbers are already sorted; the inner loop therefore runs from 1 to N-i.

Basic version

#=======================================================================
# Time Complexity of Solution:
# Best O(n^2); Average O(n^2); Worst O(n^2).
#
# Approach:
# Bubble sort is an elementary sorting algorithm. The idea is to
# imagine bubbling the smallest elements of a (vertical) array to the
# top, then the next smallest, and so on until the entire array is
# sorted. Bubble sort is worse than both insertion sort and selection
# sort: it moves elements as many times as insertion sort (bad) and it
# takes as long as selection sort (bad). On the positive side, bubble
# sort is easy to understand, and there are much-improved variants
# of it.
#=======================================================================
def bubble_sort(array):
    n = len(array)
    for i in range(n):
        for j in range(1, n - i):
            if array[j - 1] > array[j]:
                array[j - 1], array[j] = array[j], array[j - 1]
    return array

Optimization 1

Optimization 1: if a pass completes without any swaps, the array is already sorted and no further iterations are needed. A flag records this state.

def bubble_sort2(array):
    n = len(array)
    for i in range(n):
        swapped = False
        for j in range(1, n - i):
            if array[j - 1] > array[j]:
                swapped = True
                array[j - 1], array[j] = array[j], array[j - 1]
        if not swapped:  # no swaps in this pass: already sorted
            break
    return array

Optimization 2

Optimization 2: record the position of the last swap made in a pass. Everything after that position is clearly already sorted and need not be visited again, so the recorded position bounds the range of the next pass.

def bubble_sort3(array):
    n = len(array)
    k = n
    for i in range(n):
        flag = False  # did any swap happen this pass?
        for j in range(1, k):  # only scan up to the last swap position
            if array[j - 1] > array[j]:
                flag = True
                k = j  # record the position of the last swap
                array[j - 1], array[j] = array[j], array[j - 1]
        if not flag:
            break
    return array

Selection Sort

Selection sort is arguably the simplest and most intuitive sort.
An internal, unstable sort; time complexity $O(N^2)$, space complexity $O(1)$

Steps:

  1. Find the smallest (largest) element in the unsorted sequence and place it at the start of the sorted sequence.
  2. From the remaining unsorted elements, find the smallest (largest) element and append it to the end of the sorted sequence.
     Repeat until all elements are sorted.
#=======================================================================
# Time Complexity of Solution:
# Best O(n^2); Average O(n^2); Worst O(n^2).
#
# Approach:
# Selection sort is a step up from insertion sort from a memory
# viewpoint: it only swaps elements that need to be swapped. In terms
# of time complexity, however, insertion sort is better.
#=======================================================================
def select_sort(array):
    n = len(array)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if array[j] < array[min_idx]:
                min_idx = j
        array[i], array[min_idx] = array[min_idx], array[i]
    return array

Insertion Sort

Each round inserts one new number into an already sorted sequence. Insertion sort works by taking each unsorted element and scanning backwards through the sorted sequence to find the position where it belongs.
An internal, stable sort; time complexity $O(N^2)$, space complexity $O(1)$
Steps:

  1. Start from the first element, which can be considered already sorted.
  2. Take the next element and scan backwards through the sorted sequence.
  3. If the scanned (sorted) element is greater than the new element, shift it one position to the right.
  4. Repeat step 3 until a sorted element less than or equal to the new element is found.
  5. Insert the new element after that position.
  6. Repeat steps 2-5.
#=======================================================================
# Time Complexity of Solution:
# Best O(n); Average O(n^2); Worst O(n^2).
#
# Approach:
# Insertion sort is good for collections that are very small
# or nearly sorted. Otherwise it's not a good sorting algorithm:
# it moves data around too much. Each time an insertion is made,
# all elements in a greater position are shifted.
#=======================================================================
def insertion_sort(array):
    n = len(array)
    for i in range(1, n):
        val = array[i]
        position = i
        while position > 0 and array[position - 1] > val:
            array[position] = array[position - 1]
            position -= 1
        array[position] = val
    return array

Shell Sort

Shell sort, also known as diminishing increment sort, is essentially grouped insertion sort.
An internal, unstable sort. Its time complexity depends on the gap sequence; space complexity $O(1)$

The basic idea of shell sort: lay the array out in a table and insertion-sort each column, then repeat the process, each time with longer columns (a smaller gap, so fewer columns), until the table has only one column. Arranging the array as a table is only to make the algorithm easier to understand; the algorithm itself still sorts the array directly.

For example, take the numbers [ 13 14 94 33 82 25 59 94 65 23 45 27 73 25 39 10 ]. If we start sorting with a gap of 5, we can describe the algorithm better by laying the list out in a table with 5 columns, so it looks like this:

13 14 94 33 82
25 59 94 65 23
45 27 73 25 39
10

Then we sort each column:

10 14 73 25 23
13 27 94 33 39
25 59 94 65 82
45

Reading the four rows above back in order gives: [ 10 14 73 25 23 13 27 94 33 39 25 59 94 65 82 45 ]. The 10 has already moved to its correct position. Next, sort with a gap of 3:

10 14 73
25 23 13
27 94 33
39 25 59
94 65 82
45

After sorting it becomes:

10 14 13
25 23 33
27 25 59
39 65 73
45 94 82
94

Finally, sort with a gap of 1 (at that point it is just ordinary insertion sort).

def shell_sort(array):
    n = len(array)
    gap = n // 2  # initial gap
    while gap > 0:
        # gapped insertion sort over positions gap .. n-1
        for i in range(gap, n):
            position = i
            val = array[i]
            # compare elements gap apart, not adjacent ones
            while position >= gap and array[position - gap] > val:
                array[position] = array[position - gap]
                position -= gap
            array[position] = val
        gap //= 2  # shrink the gap
    return array

The code above starts with a gap of n // 2 and halves it each time until it reaches 0. The choice of gap sequence directly determines shell sort's complexity.
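To illustrate a different gap sequence, here is a sketch using Knuth's 3x+1 sequence (1, 4, 13, 40, ...); the function name is illustrative, not from the original:

```python
def shell_sort_knuth(array):
    n = len(array)
    # build the largest Knuth gap below n/3: h = 3h + 1
    gap = 1
    while gap < n // 3:
        gap = 3 * gap + 1
    while gap >= 1:
        # gapped insertion sort, same as before
        for i in range(gap, n):
            val = array[i]
            position = i
            while position >= gap and array[position - gap] > val:
                array[position] = array[position - gap]
                position -= gap
            array[position] = val
        gap //= 3  # step down the sequence: 40, 13, 4, 1
    return array
```

With this sequence the worst case is known to be better than the plain halving sequence, which can degrade to $O(N^2)$.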

Merge Sort

Merge sort is a classic application of divide and conquer. The idea is to recursively split the array, sort the sub-arrays, and then merge two sorted arrays. Because its underlying logical structure is a binary tree, it is also called two-way merge sort.

First consider merging two sorted arrays. The basic approach: compare the front elements of the two arrays, take the smaller one, and advance that array's pointer by one. Repeat the comparison until one array is exhausted, then copy over the remainder of the other.

Now consider the recursive split. Divide the array into left and right halves; if each half is internally sorted, the merge procedure above combines them into one sorted array. How do we make each half internally sorted? Split again, until each group contains a single element, which is trivially sorted. Then merge adjacent groups pairwise.

A stable sort that is not in-place. Time complexity $O(n\log n)$, space complexity $O(n)$. Heap sort and quick sort also run in $O(n\log n)$, but they are unstable; among the $O(n\log n)$ comparison sorts discussed here, merge sort is the only stable one.

Basic version

#=======================================================================
# Time Complexity of Solution:
# Best = Average = Worst = O(nlog(n)).
#
# Approach:
# Merge sort is a divide and conquer algorithm. In the divide and
# conquer paradigm, a problem is broken into pieces where each piece
# still retains all the properties of the larger problem -- except
# its size. To solve the original problem, each piece is solved
# individually; then the pieces are merged back together.
#
# For illustration, imagine needing to sort an array of 200 elements
# using selection sort. Since selection sort takes O(n^2), it would
# take about 40,000 time units to sort the array. Now imagine
# splitting the array into ten equal pieces and sorting each piece
# individually, still using selection sort. Now it would take 400
# time units to sort each piece, for a total of 4,000. Once each
# piece is sorted, merging them back together would take about 200
# time units, for a grand total of 200 + 4,000 = 4,200. Clearly 4,200
# is an impressive improvement over 40,000. Now imagine splitting the
# original array into groups of two and sorting them: in the end, it
# would take about 1,000 time units to sort the array. That's how
# merge sort works.
#
# NOTE to the Python experts:
# While it might seem more "Pythonic" to write
#
# mid = len(aList) // 2
# left = mergesort(aList[:mid])
# right = mergesort(aList[mid:])
#
# that approach takes extra memory for creating the sublists.
#=======================================================================
def merge_sort(array):
    if len(array) <= 1:
        return array
    mid = len(array) // 2
    left = merge_sort(array[:mid])
    right = merge_sort(array[mid:])
    return merge(left, right)

def merge(left, right):
    l, r = 0, 0
    result = []
    while l < len(left) and r < len(right):
        if left[l] < right[r]:
            result.append(left[l])
            l += 1
        else:
            result.append(right[r])
            r += 1
    # append whatever is left over in either list
    result += left[l:]
    result += right[r:]
    return result

Optimizations

Combining with other sorts

When the array segment is short, skip the recursion and use another sort instead: for example, when high - low < 50, switch to quick sort or insertion sort, which is better for overall running time.

Building an index

In practice, the items being sorted may not be ints but structs. In that case, record array indices instead of allocating new element storage, which avoids frequent data movement between A and the auxiliary array.
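A minimal sketch of this idea (all names here are illustrative, not from the original): run merge sort over an index array, so that only integer indices move while the records themselves stay put.

```python
def merge_sort_index(keys):
    # return the indices that would sort `keys`; keys themselves never move
    idx = list(range(len(keys)))

    def sort(lo, hi):  # sort idx[lo:hi] in place
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        sort(lo, mid)
        sort(mid, hi)
        merged, l, r = [], lo, mid
        while l < mid and r < hi:
            # compare through the index, moving only integers
            if keys[idx[l]] <= keys[idx[r]]:
                merged.append(idx[l]); l += 1
            else:
                merged.append(idx[r]); r += 1
        merged += idx[l:mid] + idx[r:hi]
        idx[lo:hi] = merged

    sort(0, len(keys))
    return idx
```

Usage: if each record has a sort key, pass the key list and then read the records out in `merge_sort_index(...)` order.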

Applications

Merge sort is very well suited to external sorting.

Example 1: 9-way merge sort

Sort 900M of data with 100M of memory

  • Read 100M of data into memory and sort it conventionally (e.g. heap sort).
  • Write the sorted data to disk.
  • Repeat the previous two steps to obtain 9 sorted 100M chunks.
  • Divide the 100M of memory into 10 blocks: the first 9 as input buffers, the last as the output buffer. For instance, load the first 10M of each of the 9 chunks into the input buffers, point a cursor at the first element of each, move the smallest of those to the output buffer, and advance that cursor by one.
  • Run the 9-way merge, writing results to the output buffer.
    • When the output buffer fills, flush it to the target file and clear it.
    • When an input buffer empties, read in the next piece of the corresponding file.
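The k-way merge at the core of these steps can be sketched in memory with a min-heap (the disk buffering is omitted; `k_way_merge` is an illustrative name, and the standard library's `heapq.merge` does the same job):

```python
import heapq

def k_way_merge(runs):
    """Merge k sorted lists (stand-ins for the sorted chunks on disk)."""
    # heap entries: (value, run index, position within run)
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)  # smallest head among all runs
        out.append(val)                  # "output buffer"
        if j + 1 < len(runs[i]):         # advance that run's cursor
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return out
```

With k runs of n total elements, each element passes through a heap of size at most k, giving O(n log k).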

Example 2: counting inversions

Given an array A[0..N-1], a pair of elements a[i], a[j] with i < j and a[i] > a[j] is called an inversion, and the number of inversions in an array is its inversion count. For example, 3, 10, 2, 6 has an inversion count of 3. How do we compute the inversion count of an array?

A brute-force solution uses two nested loops and checks every pair, giving time complexity $O(N^2)$:
i -> [0,N-1]
j -> [i+1,N-1]

Merge sort's idea works here too. Observe, for instance, merging the lists (1,3,5) and (2,4):

  1. First take the 1 from the front list.
  2. Then take the 2 from the back list; clearly this 2 forms inversions with the 3 and 5 still ahead of it, i.e. (3,2) and (5,2) are both inversions.
  3. Then take the 3 from the front list.
  4. Then take the 4 from the back list; likewise, it forms one inversion with the 5 remaining in the front list.
def merge_sort(array):
    if len(array) <= 1:
        return array
    mid = len(array) // 2
    left = merge_sort(array[:mid])
    right = merge_sort(array[mid:])
    return merge(left, right)

count = 0

def merge(left, right):
    global count
    l, r = 0, 0
    result = []
    while l < len(left) and r < len(right):
        if left[l] <= right[r]:
            result.append(left[l])
            l += 1
        else:
            result.append(right[r])
            r += 1
            # every element still in `left` forms an inversion with right[r]
            count += len(left) - l
    result += left[l:]
    result += right[r:]
    return result

print(count)  # count is the inversion number

Further thoughts

Question: can merge sort be made in-place, with $O(1)$ space complexity?

Quick Sort

Quick sort is usually noticeably faster than other $O(n\log n)$ algorithms, so it is widely used. It is also a classic application of divide and conquer, which is why it shows up so often in written tests and interviews; mastering it matters.
An internal, unstable sort; worst-case time complexity $O(N^2)$, average $O(n\log n)$
Steps:

  1. Pick an element from the list as the pivot.
  2. Partition: move all numbers greater than the pivot to its right, and all numbers less than or equal to it to its left.
  3. Recursively apply step 2 to the left and right partitions, until each contains a single number.

Basic version

#=======================================================================
# Time Complexity of Solution:
# Best = Average = O(nlog(n)); Worst = O(n^2).
#
# Approach:
# Quicksort is admirably known as the algorithm that sorts an array
# while preparing to sort it. For contrast, recall that merge sort
# first partitions an array into smaller pieces, then sorts each
# piece, then merges the pieces back. Quicksort actually sorts the
# array during the partition phase.
#
# Quicksort works by selecting an element called a pivot and splitting
# the array around that pivot such that all the elements in, say, the
# left sub-array are less than the pivot and all the elements in the
# right sub-array are greater than it. The splitting continues until
# the array can no longer be broken into pieces. That's it. Quicksort
# is done.
#
# All this fussing about quicksort sorting while preparing to sort
# may give the impression that it is better than mergesort, but it's
# not. In practice their time complexity is about the same -- with
# one funny exception: depending on how the pivot falls, there is a
# small chance the algorithm takes O(n^2) to run.
#
# The aforementioned notwithstanding, quicksort is better than
# mergesort if you consider memory usage: an in-place variant of
# quicksort requires no additional storage to work.
#=======================================================================
def quick_sort(array):
    if len(array) < 2:
        return array
    lesser = quick_sort([x for x in array[1:] if x <= array[0]])
    bigger = quick_sort([x for x in array[1:] if x > array[0]])
    return lesser + [array[0]] + bigger

The code above picks the first element as the pivot.

Applications

Given a sequence of positive integers, find the largest K numbers.
Input: an unordered sequence of numbers, and a number K.
Output: the K numbers that are the largest in the sequence.
Approach: insert the sequence into a binary search tree, then output the top K numbers via a (reverse) in-order traversal.
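Besides the BST approach just described, a common alternative (a sketch with assumed names, not the original's method) keeps a min-heap of size K, giving O(n log K):

```python
import heapq

def top_k(nums, k):
    # maintain a min-heap holding the K largest values seen so far
    heap = nums[:k]
    heapq.heapify(heap)
    for x in nums[k:]:
        if x > heap[0]:               # bigger than the smallest of the top K
            heapq.heapreplace(heap, x)  # pop the smallest, push x
    return sorted(heap, reverse=True)
```

The standard library's `heapq.nlargest(k, nums)` packages the same idea.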

Heap Sort

Heap sort appears frequently in top-K problems. It is implemented with the binary heap data structure, though the underlying storage is still a one-dimensional array. A binary heap is a nearly complete binary tree.
An internal, unstable sort; time complexity $O(n\log n)$, space complexity $O(1)$

Steps:

  1. Build the max heap (Build_Max_Heap): for indices 0..n, a single element is trivially a max heap, so all elements from index n/2 onward are already heaps. Starting from n/2-1 and working backwards, heapify each node in turn; this guarantees that by the time a node is processed, both its subtrees are already max heaps.
  2. Heap sort (HeapSort): the heap is simulated in the array, and after building the max heap the array itself is not yet sorted, so the heapified array must be put in order. The idea is to repeatedly remove the root and re-heapify: first swap heap[0] with heap[n-1] and re-heapify heap[0..n-2]; then swap heap[0] with heap[n-2] and re-heapify heap[0..n-3]; continue until heap[0] is swapped with heap[1]. Since each step moves the current maximum into the sorted region at the back, the whole array ends up sorted.
  3. Max-heap adjustment (Max_Heapify): the routine called by the two steps above; it sifts a node down so that children are never greater than their parents.
#=======================================================================
# Time Complexity of Solution:
# Best O(nlog(n)); Average O(nlog(n)); Worst O(nlog(n)).
#
# Approach:
# Heap sort happens in two phases. In the first phase, the array
# is transformed into a max heap: a binary tree where
# 1) each node is greater than each of its children,
# 2) the tree is perfectly balanced, and
# 3) all leaves are in the leftmost position available.
# In phase two the heap is continuously reduced to a sorted array:
# while the heap is not empty, remove the top of the heap into the
# sorted region of the array and fix the heap.
# Heap sort was invented by J. W. J. Williams, not by B. R. Heap.
#
# Technical Details:
# A heap is based on an array just as a hashmap is based on an
# array. For a heap, the children of an element at index n are at
# index 2n+1 (left child) and 2n+2 (right child).
#
# The max_heapify function checks that an element is greater than
# its children. If not, the values of the element and child are
# swapped, and the function continues to check and swap until the
# element reaches a position where it is greater than its children.
#=======================================================================
def heap_sort(array):
    # convert the array to a max heap
    n = len(array)
    least_parent = n // 2 - 1  # index of the last parent node
    for i in range(least_parent, -1, -1):
        max_heapify(array, i, n - 1)
    # flatten the heap into a sorted array
    for i in range(n - 1, 0, -1):
        array[i], array[0] = array[0], array[i]
        max_heapify(array, 0, i - 1)  # re-heapify the shrunken heap
    return array

# sift down so that no child is greater than its parent;
# `start` is the node to fix, `end` the inclusive heap boundary
def max_heapify(array, start, end):
    largest = 2 * start + 1  # left child
    while largest <= end:
        # pick the larger of the two children
        if largest < end and array[largest] < array[largest + 1]:
            largest += 1
        # child is larger than parent: swap and move down
        if array[largest] > array[start]:
            array[largest], array[start] = array[start], array[largest]
            start = largest
            largest = 2 * start + 1
        else:
            return

Bucket Sort

Bucket sort is quite similar to merge sort and also uses the idea of merging. The rough steps are as follows:
Not an in-place sort; stability depends on the algorithm used inside each bucket. Time complexity depends on the number of buckets K
Steps:

  1. Set up an array of empty buckets. Bucket sort requires the data to have a known range (the buckets cannot be unlimited).
  2. Divide: take each element of the input array and drop it into the corresponding bucket according to some rule.
  3. Sort each non-empty bucket; this is often done by insertion sort as elements are added.
  4. Conquer: move the elements from the non-empty buckets back into the original array.

Assume the input obeys a uniform distribution: spread it evenly over a finite number of buckets, sort each bucket separately (here using insertion sort), and finally concatenate the sorted buckets in order. Counting sort assumes the input consists of integers in a small range, whereas bucket sort assumes the input is produced by a random process that distributes elements uniformly over an interval [a,b]. Because bucket sort, like counting sort, makes assumptions that restrict the input, its complexity can be lower than that of general comparison-based sorts.

#=======================================================================
# Time Complexity of Solution:
# Best Case O(n); Average Case O(n); Worst Case O(n^2)
# (when every key lands in the same bucket).
#
# Approach:
# If it sounds too good to be true, then most likely it's not true.
# Bucket sort is no exception to this adage. For bucket sort to work
# at its blazing efficiency, there are multiple prerequisites. First,
# the hash function used to partition the elements needs to be very
# good and must produce an ordered hash: if i < k then
# hash(i) < hash(k). Second, the elements to be sorted must be
# uniformly distributed.
#
# The aforementioned aside, bucket sort is actually very good
# considering that counting sort is, reasonably speaking, its upper
# bound -- and counting sort is very fast. The particular distinction
# of bucket sort is that it uses a hash function to partition the
# keys of the input array, so that multiple keys may hash to the same
# bucket. Hence each bucket must effectively be a growable list,
# similar to radix sort.
#
# Numerous Internet sites, including university pages, have
# erroneously written counting sort code and called it bucket sort.
# Bucket sort uses a hash function to distribute keys; counting sort
# creates a bucket for each key. Indeed there are perhaps greater
# similarities between radix sort and bucket sort than between
# counting sort and bucket sort.
#
# In the program below, insertion sort is used to sort each bucket,
# to emphasize that the bucket sort algorithm does not specify which
# sorting technique to use on the buckets. Whichever sorting method
# is used on the buckets, bucket sort still tends toward O(n).
#=======================================================================
import math

def bucket_sort(array):
    # get hash parameters: [max value, number of buckets]
    code = hashing(array)
    # number of buckets: math.sqrt(len(array))
    buckets = [list() for _ in range(code[1])]
    # distribute data into the buckets: O(n)
    for i in array:
        buckets[re_hashing(i, code)].append(i)
    # sort each bucket (insertion_sort defined above): O(n) expected
    for bucket in buckets:
        insertion_sort(bucket)
    # concatenate the buckets back into the array: O(n)
    ndx = 0
    for bucket in buckets:
        for v in bucket:
            array[ndx] = v
            ndx += 1
    return array

def hashing(array):
    m = max(array)
    return [m, int(math.sqrt(len(array)))]

def re_hashing(i, code):
    # buckets are ordered from small to large
    return int(i / code[0] * (code[1] - 1))

Counting Sort

When the number of buckets equals the number of values to sort over, bucket sort becomes counting sort; counting sort is a special case of bucket sort.
Counting sort, as the name suggests, counts the occurrences of each element in the array to be sorted. Its precondition is knowing the range of the input in advance; the elements within that range are tallied in a new array whose size is the difference between the maximum and minimum elements of the input.

It essentially trades space for time, with space complexity O(max-min), and is essentially a hashing process.
Precondition: the data are int values.

Steps:

  1. Size the new array: find the maximum and minimum elements of the input.
  2. Count: tally the number of occurrences of each value i, storing it in slot i of the new array C.
  3. Accumulate the counts: starting from the first element of C, add each entry to the one before it.
  4. Fill the output array backwards: place each element i at position C(i) of the output, decrementing C(i) by 1 after each placement.
  5. Filling backwards mainly avoids placing duplicate elements at the same index of the output (and keeps the sort stable).
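The steps above can be sketched as follows (a minimal version, assuming integer input):

```python
def counting_sort(array):
    if not array:
        return array
    lo, hi = min(array), max(array)
    counts = [0] * (hi - lo + 1)       # step 1: size the count array
    for x in array:                    # step 2: tally occurrences
        counts[x - lo] += 1
    for i in range(1, len(counts)):    # step 3: prefix sums
        counts[i] += counts[i - 1]
    result = [0] * len(array)
    for x in reversed(array):          # steps 4-5: fill backwards (stable)
        counts[x - lo] -= 1
        result[counts[x - lo]] = x
    return result
```

Note the O(max-min) space: the algorithm only pays off when the value range is not much larger than the number of elements.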

Reference:
https://www.bittiger.io/blog/post/4Q4iNNbRYXkWkrAM3#quicksort
